Add inline vLLM inference provider to regression tests and fix regressions #662

Open
wants to merge 7 commits into main
Conversation

frreiss commented Dec 19, 2024

What does this PR do?

This PR adds the inline vLLM inference provider to the regression tests for inference providers. The PR also fixes some regressions in that inference provider in order to make the tests pass.

Test Plan

Command to run the new tests (from root of project):

pytest \
    -vvv \
    llama_stack/providers/tests/inference/test_text_inference.py \
    --providers inference=vllm \
    --inference-model meta-llama/Llama-3.2-3B-Instruct

Output of the above command after these changes:

/mnt/datadisk1/freiss/llama/env/lib/python3.12/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
=================================================================== test session starts ===================================================================
platform linux -- Python 3.12.7, pytest-8.3.4, pluggy-1.5.0 -- /mnt/datadisk1/freiss/llama/env/bin/python3.12
cachedir: .pytest_cache
rootdir: /mnt/datadisk1/freiss/llama/llama-stack
configfile: pyproject.toml
plugins: asyncio-0.25.0, anyio-4.6.2.post1
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 9 items                                                                                                                                         

llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_model_list[-vllm] PASSED                                          [ 11%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion[-vllm] SKIPPED (Other inference providers don't
support completion() yet)                                                                                                                           [ 22%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion_logprobs[-vllm] SKIPPED (Other inference providers
don't support completion() yet)                                                                                                                     [ 33%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_completion_structured_output[-vllm] SKIPPED (This test is not
quite robust)                                                                                                                                       [ 44%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_non_streaming[-vllm] PASSED                       [ 55%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_structured_output[-vllm] SKIPPED (Other inference providers don't
support structured output yet)                                                                                                                      [ 66%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_streaming[-vllm] PASSED                           [ 77%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[-vllm] PASSED                   [ 88%]
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling_streaming[-vllm] PASSED         [100%]

======================================================== 5 passed, 4 skipped, 2 warnings in 25.56s ========================================================
Task was destroyed but it is pending!
task: <Task pending name='Task-6' coro=<AsyncLLMEngine.run_engine_loop() running at /mnt/datadisk1/freiss/llama/env/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py:848> cb=[_log_task_completion(error_callback=<bound method...7cfc479440b0>>)() at /mnt/datadisk1/freiss/llama/env/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py:45, shield.<locals>._inner_done_callback() at /mnt/datadisk1/freiss/llama/env/lib/python3.12/asyncio/tasks.py:905]>
[rank0]:[W1219 11:38:34.689424319 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())

The warning about "asyncio_default_fixture_loop_scope" appears to be due to my environment having a newer version of pytest-asyncio.

The warning about a pending task appears to be due to a bug in vllm.AsyncLLMEngine.shutdown_background_loop(). It looks like that method returns without stopping a pending task. I will look into that issue separately.
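
Until that is fixed upstream, one possible workaround (a hypothetical sketch, not part of this PR) is to cancel any tasks that are still pending before the test session's event loop shuts down:

import asyncio

async def _cancel_pending_tasks():
    # Cancel every task on the current event loop except the one running
    # this coroutine, then wait for the cancellations to complete.
    pending = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for task in pending:
        task.cancel()
    await asyncio.gather(*pending, return_exceptions=True)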

Sources

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Ran pre-commit to handle lint / formatting issues.
  • Read the contributor guideline, Pull Request section?
  • Updated relevant documentation.
  • Wrote necessary unit or integration tests.

facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Dec 19, 2024
@@ -67,7 +67,9 @@ def sample_tool_definition():
 
 
 class TestInference:
-    @pytest.mark.asyncio
+    # Session scope for asyncio because the tests in this class all
+    # share the same provider instance.
+    @pytest.mark.asyncio(loop_scope="session")
Contributor

@frreiss can you explain a bit what this change does (or rather, what not having it causes)?

frreiss (Author)

Sure, happy to explain.

The fixtures in llama_stack/providers/tests/inference/fixtures.py are all tagged with @pytest.fixture(scope="session"). This tag means that each fixture is initialized once and then reused for the rest of the test session. Reusing fixtures this way makes the tests run much faster, because the fixtures for inline providers need to load models from disk.
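
For illustration, a session-scoped fixture looks roughly like this (the fixture name and helper below are hypothetical, not the actual contents of fixtures.py):

import pytest

@pytest.fixture(scope="session")
def inference_vllm():
    # Expensive setup (for an inline provider, loading model weights from
    # disk) runs once per test session; every test reuses the same object.
    provider = build_inline_vllm_provider()  # hypothetical helper
    yield provider
    provider.shutdown()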

The default behavior of the pytest-asyncio plugin is to spawn a new asyncio event loop for every test case.

The result of these two different scoping policies is that the fixtures are being initialized under one event loop, then the test cases interact with the fixtures from a different event loop (one event loop per test case). This change of event loops happens not to break the existing tests, because the inference providers they exercise are stateless at the asyncio layer.

vLLM is not stateless at the asyncio layer. An idle vLLM instance has an event handler waiting for new inference requests to be added to the current batch. Switching to a different event loop drops this event handler, preventing vLLM's batching mechanism from functioning. When I added the inline vLLM provider to the test cases, the change in event loops caused the inference tests to hang.

Adding @pytest.mark.asyncio(loop_scope="session") to a test case prevents pytest-asyncio from switching event loops when running the test case. This change ensures that the asyncio event loop used during the test case is the same as the event loop that was present when any session-scoped fixtures were initialized.
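
Concretely, the change looks roughly like this (a sketch only; the real tests take session-scoped fixtures as arguments, and the fixture name below is illustrative):

import pytest

class TestInference:
    # Session scope for asyncio because the tests in this class all
    # share the same provider instance.
    @pytest.mark.asyncio(loop_scope="session")
    async def test_chat_completion_non_streaming(self, inference_stack):
        ...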

The primary potential downside of scoping the event loop in this way is that, if a misbehaving test case were to leave orphan event handlers, those event handlers could cause errors in later test cases instead of causing an error in the misbehaving test case. This risk seemed acceptable to me.
